Lexicalizing and Combining
Author
Abstract
Abstracting away from the specific content of CAESAR yields the monadic concept ARRIVED(X). Starting with SAW(CAESAR, BRUTUS) and abstracting away from the contents of both saturating concepts yields the dyadic concept SAW(X, Y). I assume that concepts have contents, which need not be linguistic meanings. I follow the usual conventions of using small capitals to indicate concepts, with variables (‘x’, ‘y’, ...) indicating the number and logical order of saturaters: SAW(CAESAR, BRUTUS) implies that Caesar saw Brutus; SAW(X, BRUTUS) is a monadic concept that applies to anything that saw Brutus, while SAW(CAESAR, Y) is a monadic concept that applies to any entity that Caesar saw. But as discussed below, I do not assume that the contents of unsaturated concepts are functions, or that ARRIVED(CAESAR) denotes the value of some function with Caesar in its domain.

3. I assume that talk of lexical items expressing concepts is to be understood, eventually, in terms of how concepts are indicated in speech and/or accessed in comprehension. But I do not assume that each lexical item λ is paired with a single concept C: if only because of polysemy, and the possibility of different perspectives on the things thinkers think about, a speaker might indicate one concept with a word that fetches a related but distinct concept in a hearer. For me, saying that λ expresses C is a way of saying that λ is linked, in a special indicating/fetching way, to one or more concepts that share a certain form and perhaps a common root; see section two.

4. This divergence can be described in terms of “covert” movement or type-adjustment; see, e.g., May (1985) and Jacobson (1999).

5. But if events of arriving are not independent of arrivers, no value of the variable in ARRIVE(E, BRUTUS) is independent of Brutus, and so ARRIVE(E, X) is not a concept of a genuine relation. Compare AFTER(E, F), ABOVE(X, Y), and ARRIVE-AT(T, X), whose first variable ranges over times, which are independent of arrivers.
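The adicity bookkeeping in this notation can be mirrored by partial application, though only as a formal analogy: the text explicitly denies that the contents of unsaturated concepts are functions. The sketch below, with an invented toy extension for SAW, just shows how saturating one argument position of a dyadic concept leaves a monadic one.

```python
from functools import partial

# Formal analogy only: the author denies that unsaturated concepts ARE
# functions; this merely mirrors the saturater bookkeeping. The extension
# of SAW below is a toy assumption for illustration.
def saw(x, y):
    """Dyadic concept SAW(X, Y): holds of pairs where x saw y."""
    return (x, y) in {("Caesar", "Brutus")}

saw_brutus = partial(saw, y="Brutus")   # SAW(X, BRUTUS): monadic
caesar_saw = partial(saw, "Caesar")     # SAW(CAESAR, Y): monadic

assert saw("Caesar", "Brutus")   # SAW(CAESAR, BRUTUS): fully saturated
assert saw_brutus("Caesar")      # Caesar saw Brutus
assert caesar_saw("Brutus")      # Brutus is something Caesar saw
```

On the author's view, nothing in the analogy carries over to content: ARRIVED(CAESAR) is not claimed to denote the value of a function with Caesar in its domain.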
Likewise, while SEE(E, X, Y) is formally triadic, the corresponding relation does not hold among three independent entities. In this sense, hypothesizing that verbs indicate concepts like ARRIVE(E, X) and SEE(E, X, Y), as opposed to ARRIVED(X) and SAW(X, Y), adds one to the posited adicities, allowing for adverbial modification of event variables, without changing much else.

6. If the adverbial phrases correspond to conjuncts of a complex monadic concept, closed by existential quantification, the valid inferences are instances of conjunction reduction: ∃E[Φ(E) & Ψ(E) & Δ(E)] implies ∃E[Φ(E) & Ψ(E)], which implies ∃E[Φ(E)]. But an instance of ∃E[Φ(E) & Ψ(E) & Δ(E)] & ∃E[Φ(E) & Γ(E) & Θ(E)] need not imply ∃E[Φ(E) & Ψ(E) & Θ(E)] or ∃E[Φ(E) & Δ(E) & Γ(E)]. See Taylor (1985), expounding an argument due to Gareth Evans. The example also shows that values of event variables are not ordered n-tuples consisting of participants and a moment in time; a sharp hit (of y by x) with a red stick can occur at the same time as a soft hit with a blue stick.
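The Evans/Taylor point can be checked in a tiny model. The sketch below, with invented predicate names, builds two simultaneous hitting events: conjunction reduction (dropping conjuncts under the existential) is valid, but recombining conjuncts from the two premises is not, since no single event satisfies the mixed description.

```python
# Toy model of the two hitting events from the text (predicate names
# are assumptions for this illustration, not the author's formalism).
events = [
    {"hit": True, "sharp": True,  "soft": False, "red_stick": True,  "blue_stick": False},
    {"hit": True, "sharp": False, "soft": True,  "red_stick": False, "blue_stick": True},
]

def exists(*preds):
    """∃E[P1(E) & ... & Pn(E)]: some event satisfies all predicates jointly."""
    return any(all(e[p] for p in preds) for e in events)

# Conjunction reduction is valid: dropping conjuncts preserves truth.
assert exists("hit", "sharp", "red_stick")   # a sharp hit with a red stick
assert exists("hit", "sharp")                # hence, a sharp hit
assert exists("hit")                         # hence, a hit

# Both premises of the invalid pattern are true in the model...
assert exists("hit", "sharp", "red_stick") and exists("hit", "soft", "blue_stick")
# ...yet recombination fails: no event is a sharp hit with a blue stick.
assert not exists("hit", "sharp", "blue_stick")
```

This is also why an event cannot be identified with an n-tuple of participants plus a time: the two events share agent, patient, and time yet differ in manner and instrument.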
Similar Papers
Ordering Phrases with Function Words
This paper presents a Function Word centered, Syntax-based (FWS) solution to address phrase ordering in the context of statistical machine translation (SMT). Motivated by the observation that function words often encode grammatical relationships among phrases within a sentence, we propose a probabilistic synchronous grammar to model the ordering of function words and their left and right argumen...
Transition-based Neural Constituent Parsing
Constituent parsing is typically modeled by a chart-based algorithm under probabilistic context-free grammars or by a transition-based algorithm with rich features. Previous models rely heavily on richer syntactic information through lexicalizing rules, splitting categories, or memorizing long histories. However, enriched models incur numerous parameters and sparsity issues, and are insufficient...
A Formal Look at Dependency Grammars and Phrase-Structure Grammars, with Special Consideration of Word-Order Phenomena
The central role of the lexicon in Meaning-Text Theory (MTT) and other dependency-based linguistic theories cannot be replicated in linguistic theories based on context-free grammars (CFGs). We describe Tree Adjoining Grammar (TAG) as a system that arises naturally in the process of lexicalizing CFGs. A TAG grammar can therefore be compared directly to a Meaning-Text Model (MTM). We illustrate...
Lexicalizing DBpedia with Realization Enabled Ensemble Architecture: RealText-lex2 Approach
DBpedia encodes massive amounts of open domain knowledge and is growing by accumulating more triples at the same rate as Wikipedia. However, applications often require natural language formulations of these triples to present the information as natural text. The RealText-lex2 framework offers a scalable platform to transform these triples to natural language sentences using lexicalization ...
Lexicalizing an Ontology
Rich lexica such as WordNet are valuable resources for information extraction from unstructured text. When extraction techniques have formal ontologies as their targets, a mapping from the lexicon to the ontology has been shown to be beneficial in sense disambiguation and usability of the extracted knowledge. Such mappings are generally established manually, which can be a costly procedure if e...
Ambiguity Resolution for Machine Translation of Telegraphic Messages
Telegraphic messages with numerous instances of omission pose a new challenge to parsing, in that a sentence with omission causes a higher degree of ambiguity than a sentence without omission. Misparsing induced by omissions has a far-reaching consequence in machine translation. Namely, a misparse of the input often leads to a translation into the target language which has incoherent meaning in ...